81 research outputs found

    Welfare Impacts of BSE-Driven Trade Bans

    Get PDF
    There is often a need to respond quickly to assess the likely implications of policy changes. Here, an equilibrium displacement model is adapted to study international bans on U.S. beef. An equilibrium displacement model offers a convenient way of quickly predicting the effects of supply and demand shocks. The equilibrium displacement model used here has an international sector, which allows the study of issues that past models with only a domestic sector could not. The estimated welfare loss of U.S. beef producers, due to both Japanese and South Korean bans after the discovery of bovine spongiform encephalopathy (BSE) in the United States, is $565.31 million.
    Keywords: equilibrium displacement, international trade, meat, trade ban, welfare, Marketing
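The abstract's core device, an equilibrium displacement model, solves for relative price and quantity changes from elasticities after a shock. A minimal single-market sketch is below; the elasticities, demand shift, and revenue figure are invented for illustration and are not the paper's estimates.

```python
# Toy equilibrium displacement model (EDM) in elasticity (log-change) form.
# Supply: q = e_s * p; demand: q = e_d * p + shift. Equate and solve for the
# relative price change p, then recover the relative quantity change q.

def edm_displacement(supply_elast, demand_elast, demand_shift):
    p_change = demand_shift / (supply_elast - demand_elast)
    q_change = supply_elast * p_change
    return p_change, q_change

# A negative demand shift mimics an export ban removing part of demand.
p, q = edm_displacement(supply_elast=0.6, demand_elast=-0.8, demand_shift=-0.05)

# First-order producer surplus change relative to initial revenue P*Q:
# dPS ~= P*Q * p * (1 + 0.5 * q). The revenue figure is hypothetical.
initial_revenue = 40_000.0  # $ millions, illustrative
d_ps = initial_revenue * p * (1 + 0.5 * q)
```

With these made-up numbers the ban lowers price and quantity, so the producer surplus change `d_ps` comes out negative, matching the direction of the welfare loss the paper reports.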

    Welfare Implications of Selected Supply and Demand Shocks on Producers and Marketers of U.S. Meats

    Get PDF
    An equilibrium displacement model is developed and used to estimate the welfare impacts of government and industry-funded promotion programs, country of origin labeling (COOL), and the disease-driven, international bans on U.S. beef. The model goes beyond past studies by including the U.S. domestic market and both U.S. meat imports and exports, with meats differentiated by source of origin. The results indicate that while the benefits from beef and pork promotions are higher, the negative impacts of COOL are lower in a model with international trade than in a model without trade. International bans on U.S. beef decrease the welfare of producers and marketers of U.S. beef.
    Keywords: beef ban, country of origin, equilibrium displacement model, pork, poultry, promotion, Demand and Price Analysis

    Randomized Revenue Monotone Mechanisms for Online Advertising

    Full text link
    Online advertising is the main source of revenue for many Internet firms. A central component of online advertising is the underlying mechanism that selects and prices the winning ads for a given ad slot. In this paper we study mechanism design for the Combinatorial Auction with Identical Items (CAII), in which we are interested in selling k identical items to a group of bidders, each demanding a certain number of items between 1 and k. CAII generalizes important online advertising scenarios such as image-text and video-pod auctions [GK14]. In an image-text auction we want to fill an advertising slot on a publisher's web page with either k text-ads or a single image-ad, and in a video-pod auction we want to fill an advertising break of k seconds with video-ads of possibly different durations. Our goal is to design truthful mechanisms that satisfy Revenue Monotonicity (RM). RM is a natural constraint which states that the revenue of a mechanism should not decrease if the number of participants increases or if a participant increases her bid. [GK14] showed that no deterministic RM mechanism can attain a Price of Revenue Monotonicity (PoRM) of less than ln(k) for CAII, i.e., no deterministic mechanism can attain more than a 1/ln(k) fraction of the maximum social welfare. [GK14] also designed a mechanism with PoRM of O(ln^2(k)) for CAII. In this paper, we seek to overcome the impossibility result of [GK14] for deterministic mechanisms by using the power of randomization. We show that by using randomization, one can attain a constant PoRM. In particular, we design a randomized RM mechanism with PoRM of 3 for CAII.
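The RM constraint the abstract defines is easy to see on a textbook mechanism. The sketch below checks it for a single-item second-price auction, which is RM; this is only an illustration of the property, not the paper's CAII mechanism, and the bid values are invented.

```python
# Revenue monotonicity (RM) illustrated on a second-price auction:
# revenue is the second-highest bid, so adding a bidder or raising a bid
# can never decrease revenue.

def second_price_revenue(bids):
    if len(bids) < 2:
        return 0.0
    return sorted(bids, reverse=True)[1]

base = [5.0, 3.0, 1.0]
rev_base = second_price_revenue(base)          # second-highest of base bids
rev_entry = second_price_revenue(base + [4.0]) # one more participant
rev_raise = second_price_revenue([5.0, 6.0, 1.0])  # bidder raises 3.0 -> 6.0
```

Both RM conditions from the abstract hold here: `rev_entry >= rev_base` (more participants) and `rev_raise >= rev_base` (a higher bid). The paper's point is that for CAII no *deterministic* mechanism can be RM without losing a ln(k) factor of welfare, which randomization avoids.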

    Organic and Conventional Vegetable Production in Oklahoma

    Get PDF
    This study compares the profitability and risk of conventional and organic vegetable production systems. A linear programming model was used to find the optimal mix of vegetables in each production system, and a target MOTAD (minimization of total absolute deviations) model was used to perform risk analysis for both the organic and conventional systems.
    Keywords: Crop Production/Industries, Research Methods/Statistical Methods
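The target-MOTAD criterion mentioned in the abstract measures risk as the sum of income shortfalls below a target across scenarios. A minimal sketch of that measure is below; the income figures and target are invented, and the full LP optimization over crop mixes is omitted.

```python
# Target-MOTAD risk measure: only negative deviations below a target income
# count as risk. Scenario incomes here are hypothetical, not the study's data.

def target_shortfall_sum(incomes, target):
    """Sum of shortfalls below the target across income scenarios."""
    return sum(max(target - y, 0.0) for y in incomes)

conventional = [120.0, 80.0, 100.0, 60.0]  # income per scenario (illustrative)
organic      = [150.0, 40.0, 130.0, 70.0]

t = 90.0
risk_conv = target_shortfall_sum(conventional, t)  # shortfalls 10 + 30
risk_org  = target_shortfall_sum(organic, t)       # shortfalls 50 + 20
```

In the full model this shortfall sum is what the LP minimizes subject to an expected-income constraint, tracing out a risk-return frontier for each production system.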

    Simultaneous Ad Auctions

    Get PDF
    We consider a model with two simultaneous VCG ad auctions A and B where each advertiser chooses to participate in a single ad auction. We prove the existence and uniqueness of a symmetric equilibrium in that model. Moreover, when the click rates in A are pointwise higher than those in B, we prove that the expected revenue in A is greater than the expected revenue in B in this equilibrium. In contrast, we show that this revenue ranking does not hold when advertisers can participate in both auctions.
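Each of the two auctions in the model is a VCG position auction: slots are ranked by click rate and each winner pays the welfare loss it imposes on the others. A minimal sketch of that pricing rule follows, with invented bids and click rates.

```python
# Minimal VCG position auction: bids are values per click, ctrs are slot
# click rates, both sorted descending. Each bidder's payment is the others'
# welfare without her minus the others' welfare with her present.

def vcg_payments(bids, ctrs):
    def welfare(bs):
        bs = sorted(bs, reverse=True)
        return sum(b * c for b, c in zip(bs, ctrs))

    total = welfare(bids)
    payments = []
    for i, b in enumerate(bids):
        own = b * ctrs[i] if i < len(ctrs) else 0.0
        others_with_i = total - own
        others_without_i = welfare(bids[:i] + bids[i + 1:])
        payments.append(others_without_i - others_with_i)
    return payments

# Two slots, three bidders (illustrative numbers):
pays = vcg_payments([10.0, 6.0, 4.0], [0.2, 0.1])
```

With these numbers the top bidder pays 1.0, the second pays 0.4, and the losing bidder pays nothing. The paper's revenue comparison is over the equilibrium participation choices between two such auctions with different click-rate vectors.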

    Stable Matching with Uncertain Pairwise Preferences

    Get PDF

    Event-based Asynchronous Sparse Convolutional Networks

    Full text link
    Event cameras are bio-inspired sensors that respond to per-pixel brightness changes in the form of asynchronous and sparse "events". Recently, pattern recognition algorithms, such as learning-based methods, have made significant progress with event cameras by converting events into synchronous dense, image-like representations and applying traditional machine learning methods developed for standard cameras. However, these approaches discard the spatial and temporal sparsity inherent in event data at the cost of higher computational complexity and latency. In this work, we present a general framework for converting models trained on synchronous image-like event representations into asynchronous models with identical output, thus directly leveraging the intrinsic asynchronous and sparse nature of the event data. We show both theoretically and experimentally that this drastically reduces the computational complexity and latency of high-capacity, synchronous neural networks without sacrificing accuracy. In addition, our framework has several desirable characteristics: (i) it exploits spatio-temporal sparsity of events explicitly, (ii) it is agnostic to the event representation, network architecture, and task, and (iii) it does not require any train-time change, since it is compatible with the standard neural networks' training process. We thoroughly validate the proposed framework on two computer vision tasks: object detection and object recognition. In these tasks, we reduce the computational complexity up to 20 times with respect to high-latency neural networks. At the same time, we outperform state-of-the-art asynchronous approaches up to 24% in prediction accuracy.
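The core idea, recomputing only the outputs whose receptive fields cover changed input sites, can be shown on a 1-D convolution. This is a toy sketch of the incremental-update principle, not the paper's framework; the signal and kernel are invented.

```python
# Sparse incremental update for a 1-D convolution: when one input site
# changes, only outputs whose receptive field covers it are recomputed.

def conv1d(x, w):
    """Dense (valid) 1-D convolution-style sliding dot product."""
    k = len(w)
    return [sum(x[i + j] * w[j] for j in range(k)) for i in range(len(x) - k + 1)]

def sparse_update(y, w, changed, delta):
    """Update output y in place after x[changed] increased by delta.
    Output i depends on x[i .. i+k-1], so only i in [changed-k+1, changed]
    can be affected."""
    k = len(w)
    for i in range(max(0, changed - k + 1), min(len(y), changed + 1)):
        y[i] += delta * w[changed - i]
    return y

x = [1, 2, 3, 4, 5]
w = [1, 0, -1]
y = conv1d(x, w)           # dense pass, done once
y2 = sparse_update(list(y), w, changed=2, delta=10)  # one event at site 2
```

A single changed site touches at most `k` outputs instead of all of them, which is the source of the complexity savings when events are sparse; the asynchronous result matches a full dense recomputation exactly, mirroring the paper's "identical output" guarantee.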

    Ensemble-based multi-filter feature selection method for DDoS detection in cloud computing

    Get PDF
    Widespread adoption of cloud computing has increased the attractiveness of such services to cybercriminals. Distributed denial of service (DDoS) attacks, which target the cloud's bandwidth, services and resources to render the cloud unavailable to both cloud providers and users, are a common form of attack. In recent times, feature selection has been identified as a pre-processing phase in cloud DDoS attack defence which can potentially increase classification accuracy and reduce computational complexity by identifying important features from the original dataset during supervised learning. In this work, we propose an ensemble-based multi-filter feature selection method that combines the output of four filter methods to achieve an optimum selection. We then perform an extensive experimental evaluation of our proposed method using the NSL-KDD intrusion detection benchmark dataset and a decision tree classifier. The findings show that our proposed method can effectively reduce the number of features from 41 to 13 and has a high detection rate and classification accuracy when compared to other classification techniques.
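One common way to combine several filter methods, as the abstract describes, is by voting: each filter ranks the features, and a feature is kept if enough filters place it in their top-k. The sketch below uses that voting scheme with invented scores; the paper's specific four filters and combination rule may differ.

```python
# Ensemble multi-filter feature selection by majority vote (illustrative):
# each filter yields a score per feature; keep features that at least
# min_votes filters rank in their top-k.

def ensemble_select(filter_scores, k, min_votes):
    n = len(filter_scores[0])
    votes = [0] * n
    for scores in filter_scores:
        top = sorted(range(n), key=lambda f: scores[f], reverse=True)[:k]
        for f in top:
            votes[f] += 1
    return [f for f in range(n) if votes[f] >= min_votes]

# Four hypothetical filter outputs over five features (higher = better):
scores = [
    [0.9, 0.1, 0.8, 0.2, 0.3],
    [0.7, 0.2, 0.9, 0.1, 0.4],
    [0.8, 0.3, 0.2, 0.1, 0.9],
    [0.9, 0.1, 0.7, 0.3, 0.2],
]
selected = ensemble_select(scores, k=2, min_votes=3)
```

Features that only one filter favors are dropped, which is how an ensemble like this can shrink a 41-feature dataset such as NSL-KDD down to a small agreed-upon subset before training the classifier.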

    Preference elicitation in matching markets via interviews: a study of offline benchmarks (extended abstract)

    Get PDF
    In this paper we study two-sided matching markets in which the participants do not fully know their preferences and need to go through some costly deliberation process in order to learn their preferences. We assume that such deliberations are carried out via interviews, thus the problem is to find a good strategy for interviews to be carried out in order to minimize their use, whilst leading to a stable matching. One way to evaluate the performance of an interview strategy is to compare it against a naïve algorithm that conducts all interviews. We argue however that a more meaningful comparison would be against an optimal offline algorithm that has access to agents' preference orderings under complete information. We show that, unless P=NP, no offline algorithm can compute the optimal interview strategy in polynomial time. If we are additionally aiming for a particular stable matching, we provide restricted settings under which efficient optimal offline algorithms exist.
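The complete-information endpoint that both benchmarks in the abstract build on is classic Gale-Shapley deferred acceptance: once every interview has been conducted and all preferences are known, a stable matching follows directly. A minimal sketch with invented agents:

```python
# Gale-Shapley deferred acceptance under complete information, i.e. after
# all interviews have been carried out. Agent names are illustrative.

def gale_shapley(men_prefs, women_prefs):
    """men_prefs/women_prefs: dict name -> preference list (best first).
    Returns a stable matching as a dict woman -> man."""
    rank = {w: {m: i for i, m in enumerate(p)} for w, p in women_prefs.items()}
    free = list(men_prefs)
    next_proposal = {m: 0 for m in men_prefs}
    engaged = {}  # woman -> man
    while free:
        m = free.pop()
        w = men_prefs[m][next_proposal[m]]
        next_proposal[m] += 1
        if w not in engaged:
            engaged[w] = m
        elif rank[w][m] < rank[w][engaged[w]]:
            free.append(engaged[w])  # w trades up; her old partner is free
            engaged[w] = m
        else:
            free.append(m)           # w rejects m; he proposes again later
    return engaged

matching = gale_shapley(
    {'a': ['x', 'y'], 'b': ['x', 'y']},
    {'x': ['b', 'a'], 'y': ['a', 'b']},
)
```

The paper's question is how few of the pairwise interviews feeding these preference lists are actually needed before such a stable matching can be certified, and it shows that computing the optimal interview strategy offline is NP-hard in general.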

    Pareto Optimal Allocation under Uncertain Preferences

    No full text
